Efficient use of hardware for AI
Fewer GPUs required! Enables training of large networks without increasing the number of GPUs.
Training large deep learning networks often runs out of GPU memory, and because GPUs are expensive compared to HDDs and DRAM, adding GPU capacity is difficult. By utilizing this other hardware, large networks can be trained with only a single GPU.

【Technical Details】
■ Data transfer to HDD using CUDA Unified Memory
- Data is migrated from GPU memory to host memory without the user having to manage the transfer (see the first sketch below).
- By extending the NVIDIA driver, data is further transferred to HDD when host memory is insufficient.
■ Analysis of computational graphs
- Development of a technology that keeps only the data needed for the current computation on the GPU, moves the rest to host memory or storage, and transfers it back to the GPU shortly before it is needed (see the second sketch below).

*For more details, please download the PDF or feel free to contact us.
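The Unified Memory mechanism described above can be illustrated with standard CUDA calls. The sketch below is a minimal example assuming stock CUDA only: a buffer is allocated with cudaMallocManaged and the driver migrates its pages between host and GPU memory on demand. Spilling further to HDD, as described above, relies on the vendor's extended NVIDIA driver and is not shown here.

```cuda
// Minimal sketch of CUDA Unified Memory: the same pointer is usable on the
// host and in a kernel, and the driver migrates pages between host and GPU
// memory on demand. With an allocation larger than GPU memory, the driver
// pages data in and out transparently; spilling to HDD is not part of stock CUDA.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1ull << 28;                 // size chosen for illustration (~1 GiB of floats)
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float)); // managed allocation, visible to host and device

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // touched on the host first

    const int block = 256;
    const int grid  = (int)((n + block - 1) / block);
    scale<<<grid, block>>>(data, n, 2.0f);           // pages migrate to the GPU as the kernel accesses them
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // pages migrate back on host access
    cudaFree(data);
    return 0;
}
```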
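The computational-graph analysis itself is not public, but the kind of eviction and prefetching it would drive can be sketched with Unified Memory hints. The example below is an illustrative assumption, not the product's implementation: layer1_out and layer2_in are hypothetical activation buffers, and cudaMemPrefetchAsync is used to evict data to host memory and bring it back to the GPU shortly before it is needed.

```cuda
// Hedged sketch of explicit eviction/prefetch with Unified Memory, the kind of
// primitive a graph-aware scheduler could drive. The graph analysis is not shown;
// layer1_out and layer2_in are hypothetical activation buffers for illustration.
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB per hypothetical activation
    float *layer1_out = nullptr, *layer2_in = nullptr;
    cudaMallocManaged(&layer1_out, bytes);
    cudaMallocManaged(&layer2_in, bytes);

    int device = 0;
    cudaGetDevice(&device);
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // ... layer 1 would run on the GPU and fill layer1_out ...

    // Evict an activation that is not needed for the next few layers.
    cudaMemPrefetchAsync(layer1_out, bytes, cudaCpuDeviceId, stream);

    // Prefetch data the upcoming layer will read, overlapping with compute.
    cudaMemPrefetchAsync(layer2_in, bytes, device, stream);

    // ... layer 2 would run; later, before layer1_out is needed again:
    cudaMemPrefetchAsync(layer1_out, bytes, device, stream);

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    cudaFree(layer1_out);
    cudaFree(layer2_in);
    return 0;
}
```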
- Company: スマートホールディングス (Smart Holdings)
- Price: Other